ration[s86,jmc] The rationalist approach to AI
In their various ways, all three of Winograd, Searle and
the Dreyfuses are attacking rationalism. I have commented
separately on their essays, but I would like to enunciate a
doctrine about AI that bears some resemblance to what they are
attacking. In its present form it recognizes that some of the
phenomena the Dreyfuses and Winograd have mentioned present
real problems that AI systems must solve.
I shall begin with the programme of my 1958 "Programs
with Common Sense" (McCarthy 1960) and then modify it.
It was proposed to represent a robot's information about
the world as sentences of mathematical logic. This information
would include general information about the world, especially
about the effects of actions and other events, and also
information obtained through the robot's sense organs about
the particular situation.
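To give the flavor of such sentences, here is an illustrative
general fact about the effect of an action, together with a
particular observed fact, written in a situation-calculus style;
these formulas are illustrative rather than quoted from the 1958
paper.

    $\forall p\,\forall x\,\forall y\,\forall s\;[\,at(p,x,s) \wedge walkable(x,y) \rightarrow at(p,y,result(walk(p,y),s))\,]$
    $at(I, desk, S_0)$

Here $s$ ranges over situations and $result$ names the situation
that follows the performance of an action.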
Qualification 1. It was explicitly mentioned in that paper
that certain information, especially pictures, would be too longwinded
to represent pixel by pixel as sentences and would be represented
by any convenient data structure.
Goals and a general principle that the robot should do
what was likely to achieve its goals were also to be represented
by sentences. A subset of the sentences in the memory of the
machine was to be kept in a small database corresponding to
consciousness.
The program would attempt to deduce from the sentences
in consciousness a sentence of the form should(<action>). When
it did, it would do the action. The actions included physical
and mental actions. The latter included modifying consciousness
by getting more information from memory, forgetting some
information, and making observations both of the outside world
and of the machine's own memory. Computations with the data
not represented by sentences would be possible actions and
could result in sentences.
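The following is a minimal sketch of this reason-then-act loop
in a present-day programming language. The predicates, rules and
action named here are hypothetical illustrations, forward
chaining over ground sentences stands in for genuine deduction,
and nothing is said about search control.

    # Toy illustration of the reasoning-program idea: facts and
    # rules live in a small "consciousness" database; the program
    # forward-chains until it derives a sentence of the form
    # should(<action>), then performs that action.

    consciousness = {
        ("at", "I", "desk"),
        ("at", "car", "home"),
        ("want", ("at", "I", "airport")),
    }

    # Each rule maps a set of premise sentences to a conclusion.
    rules = [
        ({("at", "I", "desk"), ("at", "car", "home")},
         ("can", ("walk", "desk", "car"))),
        ({("can", ("walk", "desk", "car")),
          ("want", ("at", "I", "airport"))},
         ("should", ("walk", "desk", "car"))),
    ]

    def deduce_action(db, rules):
        """Forward-chain until should(<action>) appears or nothing new follows."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= db and conclusion not in db:
                    db.add(conclusion)
                    changed = True
                    if conclusion[0] == "should":
                        return conclusion[1]   # the action to perform
        return None

    action = deduce_action(consciousness, rules)
    if action is not None:
        print("performing:", action)           # ('walk', 'desk', 'car')

The 1958 proposal envisioned deduction over quantified sentences
rather than this kind of propositional matching, but the overall
shape of the loop is the same.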
There was an example of deducing from data in consciousness that
a certain two-step plan would succeed in getting to the airport.
Beyond this, details were not given, because they weren't available.
Nothing was said about how long all this would take. My opinion was
that conceptual problems remained to be solved, but I certainly expected
faster progress than has occurred.
I will refer to this approach and sufficiently similar
proposals by others as the "reasoning program" approach. I don't
claim that it is the only workable approach to AI. However, its
variants have had the most success so far.
While many attempts have been made to realize the plan
of "Programs with Common Sense" or variants of it, I have never
felt that the conceptual problems had been solved well enough
for my own next step to be an implementation. Nevertheless,
progress has been made both in implementing systems meeting
part of the goals and in developing improved concepts. Some
of the progress, e.g. STRIPS and Microplanner, involved putting
more information in the program or in production rules, thus
avoiding some of the combinatorial difficulties of a pure
logic approach with the almost uncontrollable predicate
calculus theorem provers that have been available.
Putting information that humans represent as facts into the
form of programs has led to excessively specialized systems.
Thus it has always seemed to me that the solution would
eventually involve theorem-proving problem solvers controlled
by reference to declarative meta-information. Others have
independently come to this conclusion, but its realization has
been difficult.
The most serious current attempt along these lines is Michael
Genesereth's MRS.
While I have described this rationalist approach in
terms of my own work, many of the concepts in similar
or variant form have been developed independently by other
people.
Akin to this viewpoint are Allen Newell's (1980) notion
of the "knowledge level", the ideas of my "Ascribing Mental
Qualities to Machines" (1980), and Daniel Dennett's "intentional stance".
These notions all take the view that beliefs and goals may
legitimately be ascribed to physical systems independently
of whether sentences in some language are explicitly represented.
The ascription is legitimate when certain minimal properties
of the concepts are realized, and it is useful when it helps
one understand interesting aspects of the structure, state or
behavior of the system.
These ascriptions are piecemeal, and do not require anything
like the full set of properties of the human mind. While
all of us believe that eventually the full set of human
mental properties will be understood and realized in
computer programs, no-one currently claims to understand
what they all are.
Now I shall return to the reasoning program approach.
It has been modified in various ways. The most important
modification is the addition of non-monotonic reasoning since
the late 1970s. In some sense non-monotonic reasoning was
already anticipated in the 1958 proposal, because the program's
ability to observe its own consciousness could generate sentences
asserting that sentences of a certain kind were not present in
consciousness, and such sentences could then be used
deductively. However,
my attempts to work this out at that time merely confused me.
Non-monotonic reasoning has been proposed both at the
program level and at the logical level. At the program level
we have Jon Doyle's TMS and the more recent ATMS of Johan de Kleer.
Less systematic approaches to default reasoning have been included
in many programs.
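As a toy illustration of program-level default reasoning (this
shows only the flavor of the idea, not Doyle's TMS or de Kleer's
ATMS, which record justifications and do dependency-directed
retraction), a conclusion drawn by default can be withdrawn when
a fact contradicting its assumption is added. The predicates
below are the usual textbook example, not anything taken from
those systems.

    # A belief derived by default rests on an assumption of
    # consistency; when a contradicting fact is asserted, the
    # belief is no longer drawn.  Adding a fact thereby removes
    # a conclusion, which is what makes the reasoning non-monotonic.

    facts = {"bird(tweety)"}
    defaults = [
        # (precondition, blocking fact, conclusion)
        ("bird(tweety)", "ab(tweety)", "flies(tweety)"),
    ]

    def beliefs(facts, defaults):
        held = set(facts)
        for pre, blocker, concl in defaults:
            # Apply the default only if its precondition holds and
            # the blocking fact is absent from what is believed.
            if pre in held and blocker not in held:
                held.add(concl)
        return held

    print(beliefs(facts, defaults))   # flies(tweety) is believed
    facts.add("ab(tweety)")           # learn that tweety is abnormal
    print(beliefs(facts, defaults))   # flies(tweety) is withdrawn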
At the logic level we have my circumscription, the McDermott
and Doyle non-monotonic logic and Reiter's logic of defaults.
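For reference, circumscription can be stated in one common
second-order rendering (not necessarily the exact schema of the
original paper) as minimizing the extension of a predicate P
subject to an axiom A(P):

    $Circum(A;P) \;\equiv\; A(P) \wedge \forall P'\,[\,A(P') \wedge (P' \le P) \rightarrow (P \le P')\,]$

where $P' \le P$ abbreviates $\forall x\,[P'(x) \rightarrow P(x)]$.
The circumscribed P then holds of as few objects as the axiom
permits.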
Non-monotonic reasoning is important in solving some of
the problems posed by the gang of three.